Safeguarding Support Vector Machines against Data-Poisoning Attacks
- Tech Stack: Adversarial Machine Learning, SVM Security, Data Poisoning Detection, Threat Modeling, Defensive AI.
- GitHub URL: Project Link
- Timeline: Jan 2024 - May 2024
This research examines the susceptibility of Support Vector Machine (SVM) models to data-poisoning attacks, in which an adversary injects or alters training samples to skew the learned decision boundary and induce incorrect predictions.
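As a hedged illustration rather than the project's actual code, the sketch below uses scikit-learn and synthetic data to show how label flipping, one common poisoning strategy, degrades an SVM; the dataset, RBF kernel, and 20% poisoning rate are all assumptions made for the demo.

```python
# Minimal label-flipping poisoning demo against an SVM (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: SVM trained on clean labels.
clean_acc = SVC(kernel="rbf").fit(X_train, y_train).score(X_test, y_test)

# Attack: flip 20% of the training labels (an assumed poisoning rate).
y_poisoned = y_train.copy()
flip = rng.choice(len(y_train), size=int(0.2 * len(y_train)), replace=False)
y_poisoned[flip] = 1 - y_poisoned[flip]
poisoned_acc = SVC(kernel="rbf").fit(X_train, y_poisoned).score(X_test, y_test)

print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```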
The study aims to develop defense strategies that enable these models to independently recognize and counteract such adversarial threats.
By enhancing SVMs with autonomous threat detection capabilities, the research seeks to fortify them against manipulations that could compromise their decision-making integrity.
The ultimate goal is to create robust SVM models that maintain accuracy and reliability even when faced with deliberately corrupted training data.
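One concrete sanitization defense in this spirit, offered as a sketch under stated assumptions rather than the study's published method, is to drop training points whose label disagrees with the majority of their k nearest neighbors and retrain on the filtered set; the helper name knn_label_filter and the choice k=10 are hypothetical.

```python
# Hypothetical k-NN label-agreement filter for sanitizing poisoned training data.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

def knn_label_filter(X, y, k=10):
    """Keep points whose binary label matches the majority of their k neighbors."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, neighbors = nn.kneighbors(X)           # column 0 is each point itself
    neighbor_labels = y[neighbors[:, 1:]]     # labels of the k true neighbors
    majority = (neighbor_labels.mean(axis=1) >= 0.5).astype(int)
    keep = y == majority                      # flipped labels tend to disagree
    return X[keep], y[keep]

# Composes with the poisoning sketch above:
#   X_clean, y_clean = knn_label_filter(X_train, y_poisoned, k=10)
#   defended_acc = SVC(kernel="rbf").fit(X_clean, y_clean).score(X_test, y_test)
```

A filter like this trades some clean data (legitimate points near the class boundary may be dropped) for robustness against label flipping; it is a simple baseline rather than a defense against adaptive attacks.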